relevance weight
- Europe > Austria (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > United Kingdom > Wales > Cardiff (0.04)
- (2 more...)
Learning from Few Demonstrations with Frame-Weighted Motion Generation
Sun, Jianyong, Kober, Jens, Gienger, Michael, Zhu, Jihong
Learning from Demonstration (LfD) enables robots to acquire versatile skills by learning motion policies from human demonstrations. It endows users with an intuitive interface to transfer new skills to robots without the need for time-consuming robot programming and inefficient solution exploration. During task executions, the robot motion is usually influenced by constraints imposed by environments. In light of this, task-parameterized LfD (TP-LfD) encodes relevant contextual information into reference frames, enabling better skill generalization to new situations. However, most TP-LfD algorithms typically require multiple demonstrations across various environmental conditions to ensure sufficient statistics for a meaningful model. It is not a trivial task for robot users to create different situations and perform demonstrations under all of them. Therefore, this paper presents a novel algorithm to learn skills from few demonstrations. By leveraging the reference frame weights that capture the frame importance or relevance during task executions, our method demonstrates excellent skill acquisition performance, which is validated in real robotic environments.
- Europe > United Kingdom (0.14)
- Europe > Netherlands > South Holland > Delft (0.04)
- Europe > Germany (0.04)
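The generalization step described above can be pictured as a weighted blend of frame-local predictions, with the relevance weights deciding how much each reference frame contributes to the reproduced motion. A minimal 2-D sketch (the function names and the simple weighted-average blend are illustrative assumptions, not the paper's actual motion model):

```python
import math

def transform(point, angle, origin):
    """Map a 2-D point from a reference frame into world coordinates."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (origin[0] + c * x - s * y, origin[1] + s * x + c * y)

def blend_frames(local_points, frames, weights):
    """Weighted blend of per-frame predictions into one world-frame point.

    local_points: the point predicted in each frame's local coordinates
    frames: list of (angle, origin) pairs defining each reference frame
    weights: per-frame relevance weights (need not sum to one)
    """
    total = sum(weights)
    wx = wy = 0.0
    for p, (angle, origin), w in zip(local_points, frames, weights):
        gx, gy = transform(p, angle, origin)
        wx += w * gx
        wy += w * gy
    return (wx / total, wy / total)
```

With equal weights, a point one unit right of frame A at the origin and one unit left of frame B at (2, 0) blends to the same world point, (1, 0).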
CaseEncoder: A Knowledge-enhanced Pre-trained Model for Legal Case Encoding
Ma, Yixiao, Wu, Yueyue, Su, Weihang, Ai, Qingyao, Liu, Yiqun
Legal case retrieval is a critical process for modern legal information systems. While recent studies have utilized pre-trained language models (PLMs) based on the general domain self-supervised pre-training paradigm to build models for legal case retrieval, there are limitations in using general domain PLMs as backbones. Specifically, these models may not fully capture the underlying legal features in legal case documents. To address this issue, we propose CaseEncoder, a legal document encoder that leverages fine-grained legal knowledge in both the data sampling and pre-training phases. In the data sampling phase, we enhance the quality of the training data by utilizing fine-grained law article information to guide the selection of positive and negative examples. In the pre-training phase, we design legal-specific pre-training tasks that align with the judging criteria of relevant legal cases. Based on these tasks, we introduce an innovative loss function called Biased Circle Loss to enhance the model's ability to recognize case relevance at a fine-grained level. Experimental results on multiple benchmarks demonstrate that CaseEncoder significantly outperforms both existing general pre-training models and legal-specific pre-training models in zero-shot legal case retrieval.
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > District of Columbia > Washington (0.05)
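The abstract states that fine-grained law article information guides the choice of positive and negative training examples. One plausible stand-in for such a criterion, purely as an illustration, is the overlap between the sets of law articles two cases cite; the function names and thresholds below are hypothetical, not CaseEncoder's actual sampling rule:

```python
def article_overlap(case_a, case_b):
    """Jaccard overlap between the sets of law articles two cases cite."""
    a, b = set(case_a), set(case_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_pairs(anchor, candidates, pos_threshold=0.5, neg_threshold=0.1):
    """Split candidate cases into positives/negatives by article overlap.

    candidates: {case_id: set_of_cited_articles}.  Cases with overlap in
    the ambiguous middle band are skipped rather than used for training.
    """
    positives, negatives = [], []
    for cid, articles in candidates.items():
        score = article_overlap(anchor, articles)
        if score >= pos_threshold:
            positives.append(cid)
        elif score <= neg_threshold:
            negatives.append(cid)
    return positives, negatives
```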
DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods
Wiratunga, Nirmalie, Wijekoon, Anjana, Nkisi-Orji, Ikechukwu, Martin, Kyle, Palihawadana, Chamath, Corsar, David
Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to a more desirable one. For this purpose, a counterfactual explainer needs to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counterfactual explainer. Here counterfactuals are formed by replacing feature values from a nearest unlike neighbour (NUN) until an actionable change is observed. We show how widely adopted feature relevance-based explainers (i.e. LIME, SHAP) can inform DisCERN to identify the minimum subset of "actionable features". We demonstrate our DisCERN algorithm on five datasets in a comparative study with the widely used optimisation-based counterfactual approach DiCE. Our results demonstrate that DisCERN is an effective strategy to minimise the actionable changes necessary to create good counterfactual explanations.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Case-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Memory-Based Learning (0.91)
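The NUN-replacement loop at the heart of DisCERN can be sketched directly from the abstract's description: copy feature values from the nearest unlike neighbour in relevance order until the prediction flips. This is a hedged reimplementation with illustrative names, not the authors' code:

```python
def discern_counterfactual(query, nun, relevance_order, predict, target):
    """Copy feature values from the nearest unlike neighbour (NUN) into
    the query, most relevant feature first, until the prediction flips.

    query, nun: feature dicts; relevance_order: feature names ordered
    most relevant first (e.g. from LIME or SHAP); predict: classifier
    function; target: the NUN's (desired) class.
    Returns (counterfactual, changed_features), or None if no flip occurs.
    """
    candidate = dict(query)
    changed = []
    for feature in relevance_order:
        if candidate[feature] == nun[feature]:
            continue
        candidate[feature] = nun[feature]
        changed.append(feature)
        if predict(candidate) == target:
            return candidate, changed
    return None
```

Because features are tried in relevance order, the returned set of changed features tends to be small, which is exactly the "minimum actionable change" goal the abstract describes.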
Machine-Learning media bias
D'Alonzo, Samantha, Tegmark, Max
We present an automated method for measuring media bias. Inferring which newspaper published a given article, based only on the frequencies with which it uses different phrases, leads to a conditional probability distribution whose analysis lets us automatically map newspapers and phrases into a bias space. By analyzing roughly a million articles from roughly a hundred newspapers for bias in dozens of news topics, our method maps newspapers into a two-dimensional bias landscape that agrees well with previous bias classifications based on human judgement. One dimension can be interpreted as traditional left-right bias, the other as establishment bias. This means that although news bias is inherently political, its measurement need not be.
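The first step of the method, estimating a conditional probability distribution over newspapers from phrase frequencies, can be sketched as follows. This is a toy illustration of the counting step only; the bias-space mapping the paper builds on top of it is not shown, and all names are illustrative:

```python
from collections import defaultdict

def newspaper_given_phrase(counts):
    """Estimate P(newspaper | phrase) from raw phrase counts.

    counts: {newspaper: {phrase: count}}.
    Returns {phrase: {newspaper: probability}}.
    """
    per_phrase = defaultdict(dict)  # phrase -> {newspaper: count}
    for paper, phrases in counts.items():
        for phrase, n in phrases.items():
            per_phrase[phrase][paper] = per_phrase[phrase].get(paper, 0) + n
    return {
        phrase: {paper: n / sum(papers.values()) for paper, n in papers.items()}
        for phrase, papers in per_phrase.items()
    }
```

For example, if newspaper A uses a phrase three times for every one use by newspaper B, the conditional distribution for that phrase is 0.75 / 0.25, and systematic deviations of such distributions across many phrases are what the bias mapping exploits.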
Pattern Ensembling for Spatial Trajectory Reconstruction
Pathak, Shivam, He, Mingyi, Malinchik, Sergey, Sobolevsky, Stanislav
Digital sensing provides an unprecedented opportunity to assess and understand mobility. However, incompleteness, missing information, possible inaccuracies, and temporal heterogeneity in the geolocation data can undermine its applicability. As mobility patterns are often repeated, we propose a method to use similar trajectory patterns from the local vicinity and probabilistically ensemble them to robustly reconstruct missing or unreliable observations. We evaluate the proposed approach in comparison with traditional functional trajectory interpolation using a case of sea vessel trajectory data provided by the Automatic Identification System (AIS). By effectively leveraging the similarities in real-world trajectories, our pattern ensembling method helps to reconstruct missing trajectory segments of extended length and complex geometry. It can be used to locate mobile objects when they are temporarily unobserved, as well as to create an evenly sampled trajectory interpolation useful for further trajectory mining.
- North America > United States > Iowa (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Jersey > Camden County > Cherry Hill (0.04)
- (2 more...)
- Transportation (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
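A minimal sketch of the ensembling idea, assuming trajectories sampled on a common time grid: weight each similar reference trajectory by its closeness on the observed points, then fill gaps with the weighted average. The names and the inverse-distance weighting are illustrative assumptions, not the paper's exact probabilistic model:

```python
import math

def reconstruct(track, references):
    """Fill missing points (None) in a trajectory using a similarity-
    weighted ensemble of complete reference trajectories of equal length.

    Each reference is weighted by the inverse of its mean distance to the
    target over the observed points, so closer patterns dominate the fill.
    """
    observed = [i for i, p in enumerate(track) if p is not None]
    weights = []
    for ref in references:
        d = sum(math.dist(track[i], ref[i]) for i in observed) / len(observed)
        weights.append(1.0 / (d + 1e-9))
    total = sum(weights)
    filled = list(track)
    for i, p in enumerate(track):
        if p is None:
            x = sum(w * ref[i][0] for w, ref in zip(weights, references)) / total
            y = sum(w * ref[i][1] for w, ref in zip(weights, references)) / total
            filled[i] = (x, y)
    return filled
```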
Gathering Cyber Threat Intelligence from Twitter Using Novelty Classification
Le, Ba Dung, Wang, Guanhua, Nasim, Mehwish, Babar, Ali
Protecting organizations from cyber exploits requires timely intelligence about cyber vulnerabilities and attacks, referred to as threats. Cyber threat intelligence can be extracted from various sources, including social media platforms where users publish threat information in real time. Gathering cyber threat intelligence from social media sites is a time-consuming task for security analysts that can delay timely response to emerging cyber threats. We propose a framework for automatically gathering cyber threat intelligence from Twitter by using a novelty detection model. Our model learns the features of cyber threat intelligence from the threat descriptions published in public repositories such as Common Vulnerabilities and Exposures (CVE) and classifies a new unseen tweet as either normal or anomalous with respect to cyber threat intelligence. We evaluate our framework using a purpose-built data set of tweets from 50 influential cyber-security-related accounts over twelve months (in 2018). Our classifier achieves an F1-score of 0.643 for classifying cyber threat tweets and outperforms several baselines, including binary classification models. Our analysis of the classification results suggests that cyber-threat-relevant tweets on Twitter often do not include the CVE identifier of the related threats. Hence, it would be valuable to collect these tweets and associate them with the related CVE identifier for cyber security applications.
- Oceania > Australia > South Australia > Adelaide (0.05)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.69)
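The framework can be caricatured as: learn what CVE threat descriptions look like, then flag tweets that resemble them. The toy detector below uses a bag-of-words centroid with a cosine-similarity threshold as a stand-in for the paper's novelty classification model; all names and the threshold value are illustrative assumptions:

```python
import math
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def make_detector(cve_descriptions, threshold=0.2):
    """Build a toy threat-tweet detector from CVE threat descriptions.

    A tweet counts as threat-relevant when it lies close enough to the
    centroid of the CVE training texts; otherwise it is treated as normal.
    """
    centroid = Counter()
    for text in cve_descriptions:
        centroid.update(bag_of_words(text))

    def is_threat(tweet):
        return cosine(bag_of_words(tweet), centroid) >= threshold

    return is_threat
```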
Fuzzy clustering of distribution-valued data using adaptive L2 Wasserstein distances
Irpino, Antonio, De Carvalho, Francisco, Verde, Rosanna
Distributional (or distribution-valued) data are a new type of data arising from several sources and are considered realizations of distributional variables. A new set of fuzzy c-means algorithms for data described by distributional variables is proposed. The algorithms use the $L_2$ Wasserstein distance between distributions as the dissimilarity measure. Besides extending the fuzzy c-means algorithm to distributional data, and considering a decomposition of the squared $L_2$ Wasserstein distance, we propose a set of algorithms using different automatic ways to compute the weights associated with the variables, as well as with their components, globally or cluster-wise. The relevance weights are computed within the clustering process under product-to-one constraints. They induce adaptive distances expressing the importance of each variable or of each component in the clustering process, also acting as a variable selection method in clustering. We have tested the proposed algorithms on artificial and real-world data. Results confirm that the proposed methods better capture the cluster structure of the data than standard fuzzy c-means with non-adaptive distances.
- Asia > Azerbaijan (0.04)
- South America > Brazil > Pernambuco > Recife (0.04)
- Oceania > Solomon Islands (0.04)
- (43 more...)
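For one-dimensional distributions the $L_2$ Wasserstein distance reduces to the $L_2$ distance between quantile functions, which makes a minimal sketch possible: represent each distribution by its quantiles on a common grid and plug the distance into the usual fuzzy c-means membership update. This illustration omits the paper's adaptive relevance weights, and all names are illustrative:

```python
def wasserstein2_sq(qa, qb):
    """Squared L2 Wasserstein distance between two 1-D distributions,
    each represented by its quantile function sampled on a common grid."""
    return sum((a - b) ** 2 for a, b in zip(qa, qb)) / len(qa)

def memberships(item, prototypes, m=2.0):
    """Standard fuzzy c-means membership of one item in each prototype.

    m > 1 is the usual fuzzifier; an item coinciding with a prototype
    gets full membership in that cluster.
    """
    d = [wasserstein2_sq(item, p) for p in prototypes]
    if any(di == 0.0 for di in d):
        return [1.0 if di == 0.0 else 0.0 for di in d]
    return [1.0 / sum((di / dj) ** (1.0 / (m - 1)) for dj in d) for di in d]
```

An item equidistant from two prototypes gets membership 0.5 in each, as expected from the standard update.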
Automatic Relevance Determination in Nonnegative Matrix Factorization with the β-Divergence
Tan, Vincent Y. F., Févotte, Cédric
This paper addresses the estimation of the latent dimensionality in nonnegative matrix factorization (NMF) with the β-divergence. The β-divergence is a family of cost functions that includes the squared Euclidean distance, Kullback-Leibler and Itakura-Saito divergences as special cases. Learning the model order is important as it is necessary to strike the right balance between data fidelity and overfitting. We propose a Bayesian model based on automatic relevance determination in which the columns of the dictionary matrix and the rows of the activation matrix are tied together through a common scale parameter in their prior. A family of majorization-minimization algorithms is proposed for maximum a posteriori (MAP) estimation. A subset of scale parameters is driven to a small lower bound in the course of inference, with the effect of pruning the corresponding spurious components. We demonstrate the efficacy and robustness of our algorithms by performing extensive experiments on synthetic data, the swimmer dataset, a music decomposition example and a stock price prediction task.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
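The pruning mechanism can be illustrated in miniature: each component's dictionary column and activation row share one scale parameter, and components whose scale collapses toward the lower bound are discarded, revealing the effective rank. The snippet below is a toy stand-in (squared-norm energies and illustrative hyperparameters), not the paper's MM algorithm for the β-divergence:

```python
def component_scales(W_cols, H_rows, a=1.0, b=1e-6):
    """Per-component scale estimates tying each dictionary column to its
    activation row (a toy stand-in for a MAP-style update of lambda_k;
    the hyperparameters a and b are illustrative, not the paper's)."""
    scales = []
    for w, h in zip(W_cols, H_rows):
        energy = sum(x * x for x in w) + sum(x * x for x in h)
        scales.append((energy + b) / (len(w) + len(h) + a + 1))
    return scales

def effective_rank(W_cols, H_rows, floor=1e-3):
    """Count components whose scale stays above the pruning floor."""
    return sum(1 for lam in component_scales(W_cols, H_rows) if lam > floor)
```

A component whose column and row have shrunk to zero ends up with a scale at the small lower bound and is counted out, mirroring the pruning behaviour the abstract describes.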